Compositing the rendering

So far, we have been relying on the default compositing offered by the component. In this tutorial, we will see how to create our own composition to completely change the way our image is rendered.

Our aim will be to have an environment map around our sphere. We will also modify the program rendering the mesh to reflect this environment.

Starting with the modifications

First, we need to adapt what we currently have. Let's start by changing the texture:

tex->setResourcePath("yokohamaNight.dds") ;
tex->load() ;

The resource we load is now different: it is a cube map, which makes it perfect to represent an environment. From the Texture's perspective, nothing has changed; we load it exactly the same way as before.
However, we need to interpret it differently within the HLSL program:

sources.setVertexMemory
(
	R"eos(
		cbuffer PassBuffer : register(b0)
		{
			matrix view ;
			matrix proj ;
			float3 camPos ;
		}

		struct VertexInput
		{
			float4 position : POSITION ;
			float3 normal : NORMAL ;
			matrix world : WORLDMAT ;
		} ;

		struct PixelInput
		{
			float4 position : SV_POSITION ;
			float3 normal : NORMAL ;
			float3 camDir : CAMDIR ;
		} ;

		PixelInput main (VertexInput input)
		{
			PixelInput result ;

			matrix mvp = mul(input.world, mul(view, proj)) ;
			result.position = mul(input.position, mvp) ;
			result.normal = input.normal ;
			// Direction from the camera to the vertex, in world space
			result.camDir = normalize(mul(input.position, input.world).xyz - camPos) ;

			return result ;
		}
	)eos"
) ;

sources.setPixelMemory
(
	R"eos(
		struct PixelInput
		{
			float4 position : SV_POSITION ;
			float3 normal : NORMAL ;
			float3 camDir : CAMDIR ;
		} ;

		TextureCube tex : register(t0) ;
		SamplerState customSampler : register(s0) ;

		float4 main (PixelInput input) : SV_TARGET
		{
			// Reflect the camera direction on the surface, and sample the environment along it
			float3 sampleDir = reflect(input.camDir, normalize(input.normal)) ;
			return tex.Sample(customSampler, sampleDir) ;
		}
	)eos"
) ;

Let's start with the pixel stage, as the changes it requires directly impact the vertex stage.
First, the Texture2D is now a TextureCube. Cube maps are addressed differently in HLSL: instead of sampling with a float2 UV coordinate, we sample with a float3 direction.
As a result, we reflect the camera direction off the sphere's surface using the normal, and sample the environment map along the resulting direction.
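For reference, the reflect intrinsic used here computes the mirrored direction itself: for an incident direction d and a unit normal n, reflect(d, n) returns d - 2 * dot(d, n) * n. Note that d does not even need to be unit length for the cube map lookup, since scaling the incident vector only scales the result without changing its direction.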

This is why we need to alter the vertex stage: it has to provide the camera direction.
First, the constant buffer now features the camera position.
The vertex input also receives the vertex normal from the mesh, instead of the texture coordinate.
This makes it possible to compute the camera direction at each vertex, and to feed it along with the vertex normal to the pixel stage.

And of course, we need to slightly change what the Shader feeds to the Program:

slot = cBuffer->addPassMemorySlot() ;
slot->setAsCameraPosition() ;

As the function name implies, we add a slot feeding the camera position during the pass, as the HLSL constant buffer now requires.
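To put the snippet in context, here is a minimal sketch of where this slot sits in the overall shader setup. The shader variable name and the final reload are assumptions, based on the calls used later in this tutorial:

// Sketch only : "shader" is assumed to be the Shader set up in the previous tutorials
nkGraphics::ConstantBuffer* cBuffer = shader->addConstantBuffer(0) ;

// New slot feeding the camera position each pass
nkGraphics::ShaderPassMemorySlot* slot = cBuffer->addPassMemorySlot() ;
slot->setAsCameraPosition() ;

shader->load() ;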

Launching the program in this state already allows us to witness the environment map:

Sphere with reflections
Reflections!

However, let's face it: the sphere feels out of place, and the green environment doesn't really help... How could we change that?

Introducing the compositor

The heart of image composition is the Compositor. It provides full control over the way an image is composed. One is available by default, and it is the one the rendering has been using up to now.
Let's dive right into the API:

#include <NilkinsGraphics/Compositor/Compositor.h>
#include <NilkinsGraphics/Compositor/CompositorManager.h>
#include <NilkinsGraphics/Compositor/CompositorNode.h>
#include <NilkinsGraphics/Compositor/TargetOperations.h>

#include <NilkinsGraphics/Passes/ClearTargetsPass.h>
#include <NilkinsGraphics/Passes/PostProcessPass.h>
#include <NilkinsGraphics/Passes/RenderScenePass.h>

Here, we include everything we need for the compositor itself, along with the types of passes we will use. Everything will be explained as we go over the code:

nkGraphics::Compositor* compositor = nkGraphics::CompositorManager::getInstance()->createOrRetrieve("compositor") ;

// One node is enough for our needs
nkGraphics::CompositorNode* node = compositor->addNode() ;

// Target the context's back buffer, along with its dedicated depth buffer
nkGraphics::TargetOperations* targetOp = node->addOperations() ;
targetOp->setToBackBuffer(true, 0) ;
targetOp->setToChainDepthBuffer(true) ;

// Clear the targets, render the scene, then post process the image
nkGraphics::ClearTargetsPass* clearPass = targetOp->addClearTargetsPass() ;
nkGraphics::RenderScenePass* scenePass = targetOp->addRenderScenePass() ;

nkGraphics::PostProcessPass* postProcessPass = targetOp->addPostProcessPass() ;
postProcessPass->setBackProcess(true) ;
postProcessPass->setProcessShader(envShader) ;

compositor->load() ;

First, we create the Compositor as usual, through the manager.

A Compositor is formed of one or more CompositorNodes. These nodes represent the sets of operations you want to run, and can easily be toggled on and off.
In this case, a single node will be sufficient.

A CompositorNode is composed of TargetOperations, whose aim is to specify which targets will be altered by the set of passes they are populated with.
In this case, we wish to render to the "back buffer", which is the context's surface in the window. The context also offers a dedicated depth buffer we will use.

Finally, TargetOperations are formed of Passes of different kinds.
For our rendering, we first want to clear the back buffer and depth buffer.
Then, we render the scene, that is, our sphere, currently set within the first render queue.
Finally, we request a post process pass. We mark it as a back process, so that it renders only where no mesh is present, and we set the shader it will use to process the image.

As a final step, we load the compositor so it prepares itself and takes all our changes into account.

However, we have a missing piece: what shader should the post process use?

Using a shader with the post process pass

The post process pass is specific in what it does. It renders a screen-covering square to the target, allowing us to act directly on the full bound image.

As such, the program and shader we need have some specificities. The code we are about to see should be placed before the compositor creation:

nkGraphics::Program* envProgram = nkGraphics::ProgramManager::getInstance()->createOrRetrieve("envProgram") ;

nkGraphics::ProgramSourcesHolder envSources ;

envSources.setVertexMemory
(
	R"eos(
		cbuffer constants
		{
			// One camera direction per corner of the screen square
			float4 camDir [4] ;
		}

		struct VertexInput
		{
			float4 position : POSITION ;
			uint vertexId : SV_VertexID ;
		} ;

		struct PixelInput
		{
			float4 position : SV_POSITION ;
			float4 camDir : CAMDIR ;
		} ;

		PixelInput main (VertexInput input)
		{
			PixelInput result ;

			result.position = input.position ;
			result.camDir = camDir[input.vertexId] ;

			return result ;
		}
	)eos"
) ;

envSources.setPixelMemory
(
	R"eos(
		struct PixelInput
		{
			float4 position : SV_POSITION ;
			float4 camDir : CAMDIR ;
		} ;

		TextureCube envMap : register(t0) ;
		SamplerState customSampler : register(s0) ;

		float4 main (PixelInput input) : SV_TARGET
		{
			return envMap.Sample(customSampler, normalize(input.camDir.xyz)) ;
		}
	)eos"
) ;

envProgram->setFromMemory(envSources) ;
envProgram->load() ;

The vertex stage takes advantage of the knowledge that we will get a square mapped onto the screen, each vertex being a corner of our image.
As such, the constant buffer expects 4 camera directions, corresponding to the directions at the 4 corners of the view.
The vertex input takes the position, and the vertex ID that will be used to index into the camera directions array.
The SV_VertexID semantic is provided to the HLSL program by DirectX. Its value is the index of the vertex being processed.
The post process mesh has been defined so that the vertex indices directly index the array fed by the camera directions slot we will define later. In other words, the corners are defined in the same order.
The pixel input takes the final position as usual, plus the camera direction for the given vertex. This enables the GPU to interpolate the direction between pixels, effectively reconstructing the view ray for each pixel of the image.
In the function body, the position is forwarded directly, as the square is defined in such a way that nothing more is required. The camera direction is simply the array entry for the vertex index.

The pixel stage uses the given direction to sample the environment map.

Of course, this Program needs a Shader to be used:

nkGraphics::Shader* envShader = nkGraphics::ShaderManager::getInstance()->createOrRetrieve("envShader") ;
envShader->setAttachedShaderProgram(envProgram) ;

// Feed the 4 camera corner directions, in world space, to the constant buffer
nkGraphics::ConstantBuffer* cBuffer = envShader->addConstantBuffer(0) ;
nkGraphics::ShaderPassMemorySlot* slot = cBuffer->addPassMemorySlot() ;
slot->setAsCamCornersWorld() ;

envShader->addTexture(tex, 0) ;
envShader->addSampler(sampler, 0) ;

envShader->load() ;

Past the creation, we assign the program and prepare the constant buffer, texture and sampler.
The new bit is the slot: it will feed the camera corner directions to the program, in world space. The camera view having 4 corners, this is precisely what we get in the HLSL code as an array of 4 float4.

Finalization

Now that we have everything set, all required shaders and the compositor, one last step remains: we need to specify that we want our compositor to be used when rendering.
For that:

#include <NilkinsGraphics/RenderContexts/RenderContext.h>

So that we can work with it:

context->setCompositor(compositor) ;

The RenderContext can have a Compositor assigned. When rendering the context, the attached compositor will be used. If no compositor is given, the default one set within the CompositorManager is used.
While it can be overridden, the default compositor in the nkGraphics component will:

  1. Clear the colour and depth targets
  2. Render the scene, currently for all render queues attached to an index
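
For illustration, a compositor reproducing this default behaviour with the API we just used could look like the following sketch. The name is ours, and the actual default is assembled internally by the component:

// Sketch mimicking the default behaviour, reusing only calls seen above
nkGraphics::Compositor* defaultLike = nkGraphics::CompositorManager::getInstance()->createOrRetrieve("defaultLike") ;
nkGraphics::CompositorNode* node = defaultLike->addNode() ;

nkGraphics::TargetOperations* targetOp = node->addOperations() ;
targetOp->setToBackBuffer(true, 0) ;
targetOp->setToChainDepthBuffer(true) ;

// 1. Clear the colour and depth targets, 2. Render the scene
targetOp->addClearTargetsPass() ;
targetOp->addRenderScenePass() ;

defaultLike->load() ;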

This recreates the rendering we had up till now. Now, however, we have altered the Compositor used by the context. As such, when launching the program, we should get something different:

Sphere with the environment
Our sphere now fits better in its environment!

Quick recap

Now all secrets about image composition are unveiled. In short, the process is:

  1. Prepare whatever shaders, programs, resources that will be needed
  2. Create the Compositor, populate it with CompositorNode
  3. Populate the CompositorNode with the TargetOperations they need
  4. Setup and populate the TargetOperations with the Pass they will need
  5. Load the Compositor, and assign it to a RenderContext
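
Condensed into a single sketch reusing the exact calls from this tutorial (step 1, the shaders and resources, is assumed done above):

// Steps 2 to 4 : compositor, node, target operations and passes
nkGraphics::Compositor* compositor = nkGraphics::CompositorManager::getInstance()->createOrRetrieve("compositor") ;
nkGraphics::TargetOperations* targetOp = compositor->addNode()->addOperations() ;
targetOp->setToBackBuffer(true, 0) ;
targetOp->setToChainDepthBuffer(true) ;

targetOp->addClearTargetsPass() ;
targetOp->addRenderScenePass() ;

nkGraphics::PostProcessPass* postProcess = targetOp->addPostProcessPass() ;
postProcess->setBackProcess(true) ;
postProcess->setProcessShader(envShader) ;

// Step 5 : load, then assign to the context
compositor->load() ;
context->setCompositor(compositor) ;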

And with all of that, the rendering logic will be totally overridden by the behaviour specified. This concludes this tutorial!